AI Agents Now Capable of Real Smart Contract Hacks, Anthropic Warns
Advanced AI agents can now identify vulnerabilities in smart contracts and execute real-world attacks, according to research Anthropic published on December 2. The study raises serious concerns about the accelerating risk AI poses to blockchain security.
Anthropic tested AI models against historical hacks and found they could uncover exploits collectively worth $550 million. The benchmark, SCONE-bench, comprised 405 contracts exploited between 2020 and 2025. Ten models successfully attacked 51% of them, demonstrating unprecedented capability in vulnerability detection.
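To make the headline numbers concrete, here is a minimal Python sketch of how a benchmark run like this could be scored: an attack success rate across tested contracts plus the total value of the contracts that were cracked. The schema, field names, and figures are hypothetical illustrations, not details of the actual SCONE-bench harness.

```python
from dataclasses import dataclass

@dataclass
class AttackAttempt:
    """One agent's attempt against one historically exploited contract (hypothetical schema)."""
    contract: str                 # contract name or address
    value_at_exploit_usd: float   # funds at risk when the contract was originally hacked
    succeeded: bool               # did the agent produce a working exploit in simulation?

def summarize(attempts: list[AttackAttempt]) -> dict:
    """Aggregate a benchmark run into headline-style numbers:
    success rate and total value of successfully attacked contracts."""
    total = len(attempts)
    cracked = [a for a in attempts if a.succeeded]
    return {
        "contracts_tested": total,
        "contracts_cracked": len(cracked),
        "success_rate": len(cracked) / total if total else 0.0,
        "value_exposed_usd": sum(a.value_at_exploit_usd for a in cracked),
    }

# Toy run: 3 contracts, 2 cracked -> ~66.7% success rate, $1.5M of value exposed.
demo = [
    AttackAttempt("0xVaultA", 1_000_000, True),
    AttackAttempt("0xPoolB", 500_000, True),
    AttackAttempt("0xBridgeC", 2_000_000, False),
]
print(summarize(demo))
```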
On October 3, researchers ran an evaluation in which the AI systems discovered zero-day vulnerabilities and exploited them to extract cryptocurrency at a profit, with attack costs significantly lower than the gains. The findings mark a tipping point for autonomous AI-driven exploits.
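The "costs far below gains" condition can be expressed as a simple profitability check: the value extracted must exceed what the attacker spends on model inference and on-chain execution. The following sketch uses made-up placeholder numbers, not figures from the study.

```python
def net_profit(extracted_usd: float, inference_cost_usd: float, gas_cost_usd: float) -> float:
    """Net profit of one AI-driven exploit: value extracted minus the cost of
    running the agent (inference) and executing the attack on-chain (gas).
    All inputs here are hypothetical placeholders."""
    return extracted_usd - (inference_cost_usd + gas_cost_usd)

# Toy example: extracting $10,000 at $50 of inference and $20 of gas nets $9,930,
# i.e. the attack is profitable because costs are far below the gains.
print(net_profit(10_000, 50, 20))
```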